Artificial Intelligence Nanodegree

Computer Vision Capstone

Project: Facial Keypoint Detection


Welcome to the final Computer Vision project in the Artificial Intelligence Nanodegree program!

In this project, you'll combine your knowledge of computer vision techniques and deep learning to build an end-to-end facial keypoint recognition system! Facial keypoints include points around the eyes, nose, and mouth on any face and are used in many applications, from facial tracking to emotion recognition.

There are three main parts to this project:

Part 1 : Investigating OpenCV, pre-processing, and face detection

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!


Here's what you need to know to complete the project:

  1. In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested.

    a. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

  2. In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation.

    a. Each section where you will answer a question is preceded by a 'Question X' header.

    b. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional suggestions for enhancing the project beyond the minimum requirements. If you decide to pursue the "(Optional)" sections, you should include the code in this IPython notebook.

Your project submission will be evaluated based on your answers to each of the questions and the code implementations you provide.

Steps to Complete the Project

Each part of the notebook is further broken down into separate steps. Feel free to use the links below to navigate the notebook.

In this project you will get to explore a few of the many computer vision algorithms built into the OpenCV library. This expansive library is now almost 20 years old and still growing!

The project itself is broken down into three large parts, then even further into separate steps. Make sure to read through each step, and complete any sections that begin with '(IMPLEMENTATION)' in the header; these implementation sections may contain multiple TODOs that will be marked in code. For convenience, we provide links to each of these steps below.

Part 1 : Investigating OpenCV, pre-processing, and face detection

  • Step 0: Detect Faces Using a Haar Cascade Classifier
  • Step 1: Add Eye Detection
  • Step 2: De-noise an Image for Better Face Detection
  • Step 3: Blur an Image and Perform Edge Detection
  • Step 4: Automatically Hide the Identity of an Individual

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

  • Step 5: Create a CNN to Recognize Facial Keypoints
  • Step 6: Compile and Train the Model
  • Step 7: Visualize the Loss and Answer Questions

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!

  • Step 8: Build a Robust Facial Keypoints Detector (Complete the CV Pipeline)

Step 0: Detect Faces Using a Haar Cascade Classifier

Have you ever wondered how Facebook automatically tags images with your friends' faces? Or how high-end cameras automatically find and focus on a certain person's face? Applications like these depend heavily on the machine learning task known as face detection - which is the task of automatically finding faces in images containing people.

At its root, face detection is a classification problem - that is, a problem of distinguishing between distinct classes of things. With face detection, these distinct classes are 1) images of human faces and 2) everything else.

We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on GitHub. We have downloaded one of these detectors and stored it in the detector_architectures directory.

Import Resources

In the next python cell, we load in the required libraries for this section of the project.

In [1]:
# Import required libraries for this section

%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import math
import cv2                     # OpenCV library for computer vision
from PIL import Image
import time 

Next, we load in and display a test image for performing face detection.

Note: by default, OpenCV assumes that an image's color channels are ordered Blue, then Green, then Red. This is out of step with most image formats we'll use in these experiments, whose color channels are ordered Red, then Green, then Blue. To swap the Blue and Red channels of our test image around, we will use OpenCV's cvtColor function, which you can read more about by checking out some of its documentation located here. cvtColor is a general utility function that can perform other transformations too, like converting a color image to grayscale, or transforming a standard color image to HSV color space.
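For instance, here is a minimal sketch of a few of these cvtColor conversions (using the same test image loaded in the next cell; the variable names here are ours, not part of the project template):

bgr_image = cv2.imread('images/test_image_1.jpg')         # OpenCV loads images in BGR order
rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)    # swap the Blue and Red channels
gray_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)  # color -> grayscale
hsv_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)    # RGB -> HSV color space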

In [2]:
# Load in color image for face detection
image = cv2.imread('images/test_image_1.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot our image using subplots to specify a size and title
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[2]:
<matplotlib.image.AxesImage at 0x7f64485a0208>

There are a lot of people - and faces - in this picture. 13 faces to be exact! In the next code cell, we demonstrate how to use a Haar Cascade classifier to detect all the faces in this test image.

This face detector uses information about patterns of intensity in an image to reliably detect faces under varying light conditions. So, to use this face detector, we'll first convert the image from color to grayscale.

Then, we load in the fully trained architecture of the face detector -- found in the file haarcascade_frontalface_default.xml -- and use it on our image to find faces!

To learn more about the parameters of the detector see this post.
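For reference, the two numeric arguments passed to detectMultiScale in the next cell are scaleFactor and minNeighbors; spelled out with keyword arguments, the call below is equivalent to:

faces = face_cascade.detectMultiScale(gray, scaleFactor=4, minNeighbors=6)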

In [3]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[3]:
<matplotlib.image.AxesImage at 0x7f64484f5f98>

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
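For example, to crop the first detected face out of the image (a quick sketch, assuming at least one detection; note that NumPy indexes rows first, so y comes before x):

(x, y, w, h) = faces[0]
first_face = image[y:y+h, x:x+w]   # rows (vertical) sliced by y, columns (horizontal) by x
print('First bounding box:', x, y, w, h)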


Step 1: Add Eye Detection

There are other pre-trained detectors available that use a Haar Cascade Classifier - including full human body detectors, license plate detectors, and more. A full list of the pre-trained architectures can be found here.

To test your eye detector, we'll first read in a new test image with just a single face.

In [4]:
# Load in color image for face detection
image = cv2.imread('images/james.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot the RGB image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[4]:
<matplotlib.image.AxesImage at 0x7f64485272b0>

Notice that even though this is a black and white photo, we have read it in as a color image, so it will still need to be converted to grayscale in order to perform the most accurate face detection.

So, the next steps will be to convert this image to grayscale, then load OpenCV's face detector and run it with parameters that detect this face accurately.

In [5]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 1.25, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detection')
ax1.imshow(image_with_detections)
Number of faces detected: 1
Out[5]:
<matplotlib.image.AxesImage at 0x7f64484d12b0>

(IMPLEMENTATION) Add an eye detector to the current face detection setup.

A Haar-cascade eye detector can be included in the same way that the face detector was; in this first task, your job is to do just that.

To set up an eye detector, use the stored parameters of the eye cascade detector, called haarcascade_eye.xml, located in the detector_architectures subdirectory. In the next code cell, create your eye detector and store its detections.

A few notes before you get started:

First, make sure to give your loaded eye detector the variable name

eye_cascade

and give the list of eye regions you detect the variable name

eyes

Second, since we've already run the face detector over this image, you should only search for eyes within the rectangular face regions detected in faces. This will minimize false detections.

Lastly, once you've run your eye detector over the facial detection region, you should display the RGB image with both the face detection boxes (in red) and your eye detections (in green) to verify that everything works as expected.

In [6]:
# Make a copy of the original image to plot rectangle detections
image_with_detections = np.copy(image)   

# Loop over the detections and draw their corresponding face detection boxes
for (x,y,w,h) in faces:
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h),(255,0,0), 3)  
    
# Do not change the code above this comment!

    
## TODO: Add eye detection, using haarcascade_eye.xml, to the current face detector algorithm

# Load the pre-trained eye detector from the xml file
eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')

# Detect eyes only within each detected face region, to minimize false detections
eyes = []
for (x,y,w,h) in faces:
    face_roi = gray[y:y+h, x:x+w]
    # Eye coordinates are relative to the face region, so offset them back into image coordinates
    for (ex,ey,ew,eh) in eye_cascade.detectMultiScale(face_roi, 1.03, 2):
        eyes.append((x+ex, y+ey, ew, eh))

## TODO: Loop over the eye detections and draw their corresponding boxes in green on image_with_detections
for (x,y,w,h) in eyes:
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (0,255,0), 3)


# Plot the image with both faces and eyes detected
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face and Eye Detection')
ax1.imshow(image_with_detections)
Out[6]:
<matplotlib.image.AxesImage at 0x7f644847f470>

Step 2: De-noise an Image for Better Face Detection

Image quality is an important aspect of any computer vision task. Typically, when creating a set of images to train a deep learning network, significant care is taken to ensure that training images are free of visual noise or artifacts that hinder object detection. While computer vision algorithms - like a face detector - are typically trained on 'nice' data such as this, new test data doesn't always look so nice!

When applying a trained computer vision algorithm to a new piece of test data, one often cleans it up first before feeding it in. This sort of cleaning - referred to as pre-processing - can include a number of cleaning phases like blurring, de-noising, color transformations, etc., and many of these tasks can be accomplished using OpenCV.

In this short subsection we explore OpenCV's noise-removal functionality to see how we can clean up a noisy image, which we then feed into our trained face detector.

Create a noisy image to work with

In the next cell, we create an artificial noisy version of the previous multi-face image. This is a little exaggerated - we don't typically get images that are this noisy - but image noise, or 'graininess' in a digital image, is a fairly common phenomenon.

In [7]:
# Load in the multi-face test image again
image = cv2.imread('images/test_image_1.jpg')

# Convert the image copy to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Make a copy of this image to add noise to
image_with_noise = np.copy(image)

# Create noise - here we add noise sampled randomly from a Gaussian distribution: a common model for noise
noise_level = 40
noise = np.random.randn(image.shape[0],image.shape[1],image.shape[2])*noise_level

# Add this noise to the image copy
image_with_noise = image_with_noise + noise

# Clip to the valid pixel range [0, 255] and convert back to uint8 format
image_with_noise = np.uint8(np.clip(image_with_noise, 0, 255))

# Plot our noisy image!
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image')
ax1.imshow(image_with_noise)
Out[7]:
<matplotlib.image.AxesImage at 0x7f64484a3828>

In the context of face detection, the problem with an image like this is that - due to noise - we may miss some faces or get false detections.

In the next cell we apply the same trained OpenCV detector with the same settings as before, to see what sort of detections we get.

In [8]:
# Convert the RGB  image to grayscale
gray_noise = cv2.cvtColor(image_with_noise, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray_noise, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(image_with_noise)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[8]:
<matplotlib.image.AxesImage at 0x7f6448453080>

With noise like this added, the detector can miss faces or produce false detections (and since the noise is random, results will vary from run to run)!

(IMPLEMENTATION) De-noise this image for better face detection

Time to get your hands dirty: using OpenCV's built-in color image de-noising function, fastNlMeansDenoisingColored, de-noise this image enough that all the faces in the image are properly detected. Once you have cleaned the image in the next cell, use the cell that follows to run our trained face detector over the cleaned image to check out its detections.

You can find its official documentation here and a useful example here.

Note: you can keep all parameters except photo_render fixed as shown in the second link above. Play around with the value of this parameter - see how it affects the resulting cleaned image.
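For reference, the positional parameters after the source image are dst, h (luminance filter strength), hColor (color-component filter strength), templateWindowSize, and searchWindowSize; a sketch of the call using the values from the linked example would be:

denoised_example = cv2.fastNlMeansDenoisingColored(image_with_noise, None, 10, 10, 7, 21)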

In [9]:
## TODO: Use OpenCV's built in color image de-noising function to clean up our noisy image!
denoised_image = cv2.fastNlMeansDenoisingColored(image_with_noise,None,12,12,6,30)

# Display the image with denoise applied
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('DeNoised Image')
ax1.imshow(denoised_image)
Out[9]:
<matplotlib.image.AxesImage at 0x7f6402a42cc0>
In [27]:
## TODO: Run the face detector on the de-noised image to improve your detections and display the result
# Convert the RGB  image to grayscale
gray_noise_denoise = cv2.cvtColor(denoised_image, cv2.COLOR_RGB2GRAY)

# Detect the faces in image
faces_denoise = face_cascade.detectMultiScale(gray_noise_denoise, 1.3, 4)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces_denoise))

# Make a copy of the original image to draw face detections on
image_with_detections_denoise = np.copy(denoised_image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces_denoise:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections_denoise, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('DeNoised Image with Face Detections')
ax1.imshow(image_with_detections_denoise)
Number of faces detected: 13
Out[27]:
<matplotlib.image.AxesImage at 0x7f388f534080>

Step 3: Blur an Image and Perform Edge Detection

Now that we have developed a simple pipeline for detecting faces using OpenCV - let's start playing around with a few fun things we can do with all those detected faces!

Importance of Blur in Edge Detection

Edge detection is a concept that pops up almost everywhere in computer vision applications, as edge-based features (as well as features built on top of edges) are often some of the best features for, e.g., object detection and recognition problems.

Edge detection is a dimension reduction technique - by keeping only the edges of an image we get to throw away a lot of non-discriminating information. And typically the most useful kind of edge-detection is one that preserves only the important, global structures (ignoring local structures that aren't very discriminative). So removing local structures / retaining global structures is a crucial pre-processing step to performing edge detection in an image, and blurring can do just that.
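Here is a minimal sketch of the blur-then-detect idea (the parameter values are illustrative, and GaussianBlur stands in for any smoothing filter; gray is a grayscale image like the ones used throughout this notebook):

smoothed = cv2.GaussianBlur(gray, (9, 9), 0)   # remove small local structures first
edges = cv2.Canny(smoothed, 100, 200)          # keep only the surviving global outlines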

Below is an animated gif showing the result of an edge-detected cat taken from Wikipedia, where the image is gradually blurred more and more prior to edge detection. When the animation begins you can't quite make out what it's a picture of, but as the animation evolves and local structures are removed via blurring the cat becomes visible in the edge-detected image.

Edge detection is a convolution performed on the image itself, and you can read about Canny edge detection on this OpenCV documentation page.
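To make the convolution point concrete, here is a sketch of a hand-built horizontal Sobel kernel applied with filter2D (illustrative only - Canny adds smoothing, gradient magnitudes, and thresholding on top of a step like this):

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
edge_response = cv2.filter2D(gray, -1, sobel_x)   # responds where horizontal intensity changes occur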

Canny edge detection

In the cell below we load in a test image, then apply Canny edge detection to it. The original image is shown in the left panel of the figure, while the edge-detected version is shown on the right. Notice how busy the result looks - too many little details are preserved in the image before it is sent to the edge detector. When applied in computer vision applications, edge detection should preserve global structure while doing away with local structures that don't help describe what objects are in the image.

In [11]:
# Load in the image
image = cv2.imread('images/fawzia.jpg')

# Convert to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  

# Perform Canny edge detection
edges = cv2.Canny(gray,100,200)

# Dilate the image to amplify edges
edges = cv2.dilate(edges, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges, cmap='gray')
Out[11]:
<matplotlib.image.AxesImage at 0x7f64029e42e8>

Without first blurring the image, and removing small, local structures, a lot of irrelevant edge content gets picked up and amplified by the detector (as shown in the right panel above).

(IMPLEMENTATION) Blur the image then perform edge detection

In the next cell, you will repeat this experiment - blurring the image first to remove these local structures, so that only the important boundary details remain in the edge-detected image.

Blur the image by using OpenCV's filter2D functionality - which is discussed in this documentation page - and use an averaging kernel of width equal to 4.

In [12]:
### TODO: Blur the test image using OpenCV's filter2D functionality,
# Use an averaging kernel, and a kernel width equal to 4

original_img = np.copy(image)
kernel = np.ones((4,4),np.float32)/16
blur = cv2.filter2D(original_img,-1,kernel)

## TODO: Then perform Canny edge detection and display the output
edges_blur = cv2.Canny(blur,80,300)
# Dilating further amplifies the edges:
edges_blur = cv2.dilate(edges_blur, None)


# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Blurred Image')
ax1.imshow(blur)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges_blur, cmap='gray')
Out[12]:
<matplotlib.image.AxesImage at 0x7f6402937cc0>

Step 4: Automatically Hide the Identity of an Individual

If you film something like a documentary or reality TV, you must get permission from every individual shown on film before you can show their face; otherwise you need to blur it out - by blurring the face so much that even its global structures are obscured! This is also true for projects like Google's Street View maps - an enormous collection of mapping images taken from a fleet of Google vehicles. Because it would be impossible for Google to get the permission of every single person accidentally captured in one of these images, their pipeline must automatically detect and blur out the faces of everyone it photographs. Here are a few examples of folks caught in the camera of a Google Street View vehicle.

Read in an image to perform identity detection

Let's try this out for ourselves. Use the face detection pipeline built above and what you know about using filter2D to blur an image, and use these in tandem to hide the identity of the person in the following image - loaded in and displayed in the next cell.

In [13]:
# Load in the image
image = cv2.imread('images/gus.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[13]:
<matplotlib.image.AxesImage at 0x7f64029626a0>

(IMPLEMENTATION) Use blurring to hide the identity of an individual in an image

The idea here is to 1) automatically detect the face in this image, and then 2) blur it out! Make sure to adjust the parameters of the averaging blur filter to completely obscure this person's identity.

In [14]:
## TODO: Implement face detection

# Convert the RGB  image to grayscale:
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Pre-trained face detector from the xml file:
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image:
faces = face_cascade.detectMultiScale(gray, 1.5, 3)

# Make a copy of the original image:
image_with_detections = np.copy(image)

# Draw a red bounding box around each detected face
for (x,y,w,h) in faces:
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)

# Display the image with the detections
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detection')
ax1.imshow(image_with_detections)


## TODO: Blur the bounding box around each detected face using an averaging filter and display the result
final_image = np.copy(image)

# A large 60x60 averaging kernel - strong enough to obscure even global facial structure
kernel_B = np.ones((60,60),np.float32)/3600

# Blur each detected face region in place
for (x,y,w,h) in faces:
    final_image[y:y+h, x:x+w] = cv2.filter2D(final_image[y:y+h, x:x+w], -1, kernel_B)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Final Blurred Image')
ax2.imshow(final_image)
Out[14]:
<matplotlib.image.AxesImage at 0x7f64028b91d0>

Step 5: Create a CNN to Recognize Facial Keypoints

OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. In this stage of the project you will create your own end-to-end pipeline - employing convolutional networks in Keras along with OpenCV - to apply a "selfie" filter to streaming video and images.

You will start by creating and then training a convolutional network that can detect facial keypoints in a small dataset of cropped images of human faces. We then guide you toward using OpenCV to expand your detection algorithm to more general images. What are facial keypoints? Let's take a look at some examples.

Facial keypoints (also called facial landmarks) are the small blue-green dots shown on each of the faces in the image above - there are 15 keypoints marked in each image. They mark important areas of the face - the eyes, corners of the mouth, the nose, etc. Facial keypoints can be used in a variety of machine learning applications from face and emotion recognition to commercial applications like the image filters popularized by Snapchat.

Below we illustrate a filter that, using the results of this section, automatically places sunglasses on people in images (using the facial keypoints to place the glasses correctly on each face). Here, the facial keypoints have been colored lime green for visualization purposes.

Make a facial keypoint detector

But first things first: how can we make a facial keypoint detector? Well, at a high level, notice that facial keypoint detection is a regression problem. A single face corresponds to a set of 15 facial keypoints (a set of 15 corresponding $(x, y)$ coordinate pairs, i.e., a 30-dimensional output vector). Because our input data are images, we can employ a convolutional neural network to recognize patterns in our images and learn how to identify these keypoints given sets of labeled data.

In order to train a regressor, we need a training set - a set of facial image / facial keypoint pairs to train on. For this we will be using this dataset from Kaggle. We've already downloaded this data and placed it in the data directory. Make sure that you have both the training and test data files. The training dataset contains several thousand $96 \times 96$ grayscale images of cropped human faces, along with each face's 15 corresponding facial keypoints (also called landmarks) that have been placed by hand, and recorded in $(x, y)$ coordinates. This wonderful resource also has a substantial testing set, which we will use in tinkering with our convolutional network.

To load in this data, run the Python cell below - notice we will load in both the training and testing sets.

The load_data function is in the included utils.py file.

In [15]:
from utils import *

# Load training set
X_train, y_train = load_data()
print("X_train.shape == {}".format(X_train.shape))
print("y_train.shape == {}; y_train.min == {:.3f}; y_train.max == {:.3f}".format(
    y_train.shape, y_train.min(), y_train.max()))

# Load testing set
X_test, _ = load_data(test=True)
print("X_test.shape == {}".format(X_test.shape))
Using TensorFlow backend.
X_train.shape == (2140, 96, 96, 1)
y_train.shape == (2140, 30); y_train.min == -0.920; y_train.max == 0.996
X_test.shape == (1783, 96, 96, 1)

The load_data function in utils.py originates from this excellent blog post, which you are strongly encouraged to read. Please take the time now to review this function. Note how the output values - that is, the coordinates of each set of facial landmarks - have been normalized to take on values in the range $[-1, 1]$, while the pixel values of each input point (a facial image) have been normalized to the range $[0,1]$.
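Given that normalization (and assuming the same scaling as the blog post's load function, which maps a pixel coordinate $p$ to $(p - 48) / 48$), converting a predicted keypoint value back to $96 \times 96$ pixel coordinates is a one-liner:

def to_pixel_coords(normalized_value):
    # maps [-1, 1] back onto the 96-pixel axis: -1 -> 0, 0 -> 48, 1 -> 96
    return normalized_value * 48 + 48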

Note: the original Kaggle dataset contains some images with several missing keypoints. For simplicity, the load_data function removes those images with missing labels from the dataset. As an optional extension, you are welcome to amend the load_data function to include the incomplete data points.
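One way to sketch that optional extension: keep the incomplete rows and mask out the missing targets when measuring error. The snippet below is illustrative NumPy only (a real Keras loss would need backend operations), and masked_mse is a name we introduce here:

def masked_mse(y_true, y_pred):
    # NaN entries in y_true mark missing keypoints; exclude them from the average
    mask = ~np.isnan(y_true)
    squared_error = np.where(mask, (y_true - y_pred) ** 2, 0.0)
    return squared_error.sum() / mask.sum()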

Visualize the Training Data

Execute the code cell below to visualize a subset of the training data.

In [16]:
import matplotlib.pyplot as plt
%matplotlib inline

fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_train[i], y_train[i], ax)

For each training image, there are two landmarks per eyebrow (four total), three per eye (six total), four for the mouth, and one for the tip of the nose.

Review the plot_data function in utils.py to understand how the 30-dimensional training labels in y_train are mapped to facial locations, as this function will prove useful for your pipeline.
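In that layout, the 30 values of each label vector interleave x and y coordinates, so a sketch of the mapping used for plotting (assuming the same scaling as plot_data) looks like:

keypoints = y_train[0]
x_coords = keypoints[0::2] * 48 + 48   # even indices hold x values
y_coords = keypoints[1::2] * 48 + 48   # odd indices hold y values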

(IMPLEMENTATION) Specify the CNN Architecture

In this section, you will specify a neural network for predicting the locations of facial keypoints. Use the code cell below to specify the architecture of your neural network. We have imported some layers that you may find useful for this task, but if you need to use more Keras layers, feel free to import them in the cell.

Your network should accept a $96 \times 96$ grayscale image as input, and it should output a vector with 30 entries, corresponding to the predicted (horizontal and vertical) locations of 15 facial keypoints. If you are not sure where to start, you can find some useful starting architectures in this blog, but you are not permitted to copy any of the architectures that you find online.

In [17]:
# Import deep learning resources from Keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout
from keras.layers import Flatten, Dense

## TODO: Specify a CNN architecture
# Your model should accept 96x96 pixel grayscale images as input
# It should have a fully-connected output layer with 30 values (2 for each facial keypoint)

model = Sequential()
model.add(Conv2D(filters=16, kernel_size=3, activation='relu', input_shape=(96, 96, 1)))
model.add(MaxPooling2D(pool_size=2))

model.add(Conv2D(filters=32, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))

model.add(Conv2D(filters=64, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))

model.add(Conv2D(filters=128, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))

model.add(Conv2D(filters=256, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))

model.add(Flatten())

model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))


model.add(Dense(30))



# Summarize the model
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 94, 94, 16)        160       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 45, 45, 32)        4640      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 20, 20, 64)        18496     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 8, 8, 128)         73856     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 2, 2, 256)         295168    
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               131584    
_________________________________________________________________
dropout_5 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 30)                15390     
=================================================================
Total params: 539,294
Trainable params: 539,294
Non-trainable params: 0
_________________________________________________________________

Step 6: Compile and Train the Model

After specifying your architecture, you'll need to compile and train the model to detect facial keypoints.

(IMPLEMENTATION) Compile and Train the Model

Use the compile method to configure the learning process. Experiment with your choice of optimizer; you may have some ideas about which will work best (SGD vs. RMSprop, etc.), but take the time to empirically verify your theories.

Use the fit method to train the model. Break off a validation set by setting validation_split=0.2. Save the returned History object in the history variable.

Experiment with your model to minimize the validation loss (measured as mean squared error). A very good model will achieve about 0.0015 loss (though it's possible to do even better). When you have finished training, save your model as an HDF5 file with file path my_model.h5.

In [18]:
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, History

hist = History()
epochs = 100
batch_size = 128

checkpointer = ModelCheckpoint(filepath='weights.final.hdf5', 
                               verbose=1, save_best_only=True)

## TODO: Compile the model
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])

# Fit the model:
hist_final = model.fit(X_train, y_train, validation_split=0.2,
          epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, hist], verbose=1)


## TODO: Save the model as my_model.h5
model.save('my_model.h5')
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0483 - acc: 0.3510Epoch 00000: val_loss improved from inf to 0.04456, saving model to weights.final.hdf5
1712/1712 [==============================] - 2s - loss: 0.0473 - acc: 0.3581 - val_loss: 0.0446 - val_acc: 0.6963
Epoch 2/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0127 - acc: 0.5102Epoch 00001: val_loss improved from 0.04456 to 0.02644, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0126 - acc: 0.5064 - val_loss: 0.0264 - val_acc: 0.6963
Epoch 3/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0081 - acc: 0.5913Epoch 00002: val_loss improved from 0.02644 to 0.01632, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0081 - acc: 0.5929 - val_loss: 0.0163 - val_acc: 0.6963
Epoch 4/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0068 - acc: 0.6394Epoch 00003: val_loss improved from 0.01632 to 0.01298, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0069 - acc: 0.6384 - val_loss: 0.0130 - val_acc: 0.6963
Epoch 5/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0064 - acc: 0.6569Epoch 00004: val_loss improved from 0.01298 to 0.01021, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0064 - acc: 0.6571 - val_loss: 0.0102 - val_acc: 0.6963
Epoch 6/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0061 - acc: 0.6749Epoch 00005: val_loss improved from 0.01021 to 0.00718, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0060 - acc: 0.6729 - val_loss: 0.0072 - val_acc: 0.6963
Epoch 7/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0058 - acc: 0.6893Epoch 00006: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0058 - acc: 0.6910 - val_loss: 0.0075 - val_acc: 0.6963
Epoch 8/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0057 - acc: 0.6755Epoch 00007: val_loss improved from 0.00718 to 0.00676, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0057 - acc: 0.6770 - val_loss: 0.0068 - val_acc: 0.6963
Epoch 9/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0055 - acc: 0.6707Epoch 00008: val_loss improved from 0.00676 to 0.00666, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0055 - acc: 0.6746 - val_loss: 0.0067 - val_acc: 0.6963
Epoch 10/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0054 - acc: 0.6941Epoch 00009: val_loss improved from 0.00666 to 0.00661, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0054 - acc: 0.6928 - val_loss: 0.0066 - val_acc: 0.6963
Epoch 11/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0053 - acc: 0.6971Epoch 00010: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0053 - acc: 0.6963 - val_loss: 0.0067 - val_acc: 0.6963
Epoch 12/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0053 - acc: 0.6989Epoch 00011: val_loss improved from 0.00661 to 0.00527, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0053 - acc: 0.6992 - val_loss: 0.0053 - val_acc: 0.6963
Epoch 13/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0051 - acc: 0.7037Epoch 00012: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0051 - acc: 0.7004 - val_loss: 0.0055 - val_acc: 0.6963
Epoch 14/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0050 - acc: 0.6893Epoch 00013: val_loss improved from 0.00527 to 0.00461, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0050 - acc: 0.6910 - val_loss: 0.0046 - val_acc: 0.6963
Epoch 15/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0049 - acc: 0.7067Epoch 00014: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0050 - acc: 0.7044 - val_loss: 0.0049 - val_acc: 0.6963
Epoch 16/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0049 - acc: 0.6977Epoch 00015: val_loss improved from 0.00461 to 0.00449, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0048 - acc: 0.6968 - val_loss: 0.0045 - val_acc: 0.6963
Epoch 17/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0047 - acc: 0.7013Epoch 00016: val_loss improved from 0.00449 to 0.00430, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0048 - acc: 0.7004 - val_loss: 0.0043 - val_acc: 0.6963
Epoch 18/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0047 - acc: 0.7031Epoch 00017: val_loss improved from 0.00430 to 0.00406, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0047 - acc: 0.7033 - val_loss: 0.0041 - val_acc: 0.6963
Epoch 19/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0046 - acc: 0.7013Epoch 00018: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0046 - acc: 0.7015 - val_loss: 0.0041 - val_acc: 0.6963
Epoch 20/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0046 - acc: 0.7031Epoch 00019: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0045 - acc: 0.7033 - val_loss: 0.0041 - val_acc: 0.6963
Epoch 21/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0044 - acc: 0.7055Epoch 00020: val_loss improved from 0.00406 to 0.00382, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0044 - acc: 0.7068 - val_loss: 0.0038 - val_acc: 0.6963
Epoch 22/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0042 - acc: 0.7007Epoch 00021: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0042 - acc: 0.7004 - val_loss: 0.0039 - val_acc: 0.6963
Epoch 23/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0041 - acc: 0.6977Epoch 00022: val_loss improved from 0.00382 to 0.00332, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0041 - acc: 0.7004 - val_loss: 0.0033 - val_acc: 0.6963
Epoch 24/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0039 - acc: 0.7007Epoch 00023: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0039 - acc: 0.6998 - val_loss: 0.0035 - val_acc: 0.6963
Epoch 25/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0037 - acc: 0.6941Epoch 00024: val_loss improved from 0.00332 to 0.00318, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0037 - acc: 0.6939 - val_loss: 0.0032 - val_acc: 0.6986
Epoch 26/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0035 - acc: 0.7013Epoch 00025: val_loss improved from 0.00318 to 0.00289, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0035 - acc: 0.7021 - val_loss: 0.0029 - val_acc: 0.6986
Epoch 27/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0034 - acc: 0.6995Epoch 00026: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0034 - acc: 0.6968 - val_loss: 0.0031 - val_acc: 0.7033
Epoch 28/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0032 - acc: 0.6971Epoch 00027: val_loss improved from 0.00289 to 0.00247, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0032 - acc: 0.6957 - val_loss: 0.0025 - val_acc: 0.7033
Epoch 29/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0032 - acc: 0.7001Epoch 00028: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0032 - acc: 0.7015 - val_loss: 0.0029 - val_acc: 0.6963
Epoch 30/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0030 - acc: 0.6995Epoch 00029: val_loss improved from 0.00247 to 0.00243, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0030 - acc: 0.7021 - val_loss: 0.0024 - val_acc: 0.7103
Epoch 31/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0029 - acc: 0.6905Epoch 00030: val_loss improved from 0.00243 to 0.00222, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0029 - acc: 0.6881 - val_loss: 0.0022 - val_acc: 0.7033
Epoch 32/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0028 - acc: 0.7061Epoch 00031: val_loss improved from 0.00222 to 0.00203, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0028 - acc: 0.7062 - val_loss: 0.0020 - val_acc: 0.7173
Epoch 33/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0027 - acc: 0.7163Epoch 00032: val_loss improved from 0.00203 to 0.00197, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0028 - acc: 0.7167 - val_loss: 0.0020 - val_acc: 0.7056
Epoch 34/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0027 - acc: 0.7157Epoch 00033: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0027 - acc: 0.7150 - val_loss: 0.0022 - val_acc: 0.7173
Epoch 35/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0026 - acc: 0.7188Epoch 00034: val_loss improved from 0.00197 to 0.00189, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0026 - acc: 0.7190 - val_loss: 0.0019 - val_acc: 0.7056
Epoch 36/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0025 - acc: 0.7157Epoch 00035: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0025 - acc: 0.7155 - val_loss: 0.0019 - val_acc: 0.7079
Epoch 37/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0024 - acc: 0.6989Epoch 00036: val_loss improved from 0.00189 to 0.00184, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0024 - acc: 0.7015 - val_loss: 0.0018 - val_acc: 0.7103
Epoch 38/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0023 - acc: 0.7338Epoch 00037: val_loss improved from 0.00184 to 0.00181, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0023 - acc: 0.7331 - val_loss: 0.0018 - val_acc: 0.7196
Epoch 39/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0023 - acc: 0.7169Epoch 00038: val_loss improved from 0.00181 to 0.00157, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0023 - acc: 0.7173 - val_loss: 0.0016 - val_acc: 0.7103
Epoch 40/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0023 - acc: 0.7350Epoch 00039: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0023 - acc: 0.7331 - val_loss: 0.0016 - val_acc: 0.7243
Epoch 41/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0022 - acc: 0.7344Epoch 00040: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0022 - acc: 0.7354 - val_loss: 0.0017 - val_acc: 0.7103
Epoch 42/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0022 - acc: 0.7314Epoch 00041: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0022 - acc: 0.7296 - val_loss: 0.0017 - val_acc: 0.7103
Epoch 43/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0021 - acc: 0.7242Epoch 00042: val_loss improved from 0.00157 to 0.00151, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0021 - acc: 0.7249 - val_loss: 0.0015 - val_acc: 0.7220
Epoch 44/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0021 - acc: 0.7386Epoch 00043: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0021 - acc: 0.7371 - val_loss: 0.0016 - val_acc: 0.7220
Epoch 45/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0020 - acc: 0.7284Epoch 00044: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0020 - acc: 0.7307 - val_loss: 0.0015 - val_acc: 0.7173
Epoch 46/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0020 - acc: 0.7392Epoch 00045: val_loss improved from 0.00151 to 0.00140, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0020 - acc: 0.7383 - val_loss: 0.0014 - val_acc: 0.7220
Epoch 47/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7446Epoch 00046: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0019 - acc: 0.7442 - val_loss: 0.0017 - val_acc: 0.7383
Epoch 48/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0020 - acc: 0.7458Epoch 00047: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0020 - acc: 0.7447 - val_loss: 0.0014 - val_acc: 0.7196
Epoch 49/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7548Epoch 00048: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0019 - acc: 0.7535 - val_loss: 0.0015 - val_acc: 0.7407
Epoch 50/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7458Epoch 00049: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0019 - acc: 0.7436 - val_loss: 0.0015 - val_acc: 0.7453
Epoch 51/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7404Epoch 00050: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0018 - acc: 0.7371 - val_loss: 0.0016 - val_acc: 0.7500
Epoch 52/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7482Epoch 00051: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0019 - acc: 0.7500 - val_loss: 0.0016 - val_acc: 0.7477
Epoch 53/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7320Epoch 00052: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0018 - acc: 0.7331 - val_loss: 0.0016 - val_acc: 0.7243
Epoch 54/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7410Epoch 00053: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0018 - acc: 0.7407 - val_loss: 0.0016 - val_acc: 0.7430
Epoch 55/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7470Epoch 00054: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0018 - acc: 0.7494 - val_loss: 0.0014 - val_acc: 0.7477
Epoch 56/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7416Epoch 00055: val_loss improved from 0.00140 to 0.00134, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0017 - acc: 0.7442 - val_loss: 0.0013 - val_acc: 0.7477
Epoch 57/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7374Epoch 00056: val_loss improved from 0.00134 to 0.00124, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0017 - acc: 0.7383 - val_loss: 0.0012 - val_acc: 0.7500
Epoch 58/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7458Epoch 00057: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0017 - acc: 0.7436 - val_loss: 0.0012 - val_acc: 0.7523
Epoch 59/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7392Epoch 00058: val_loss improved from 0.00124 to 0.00123, saving model to weights.final.hdf5
1712/1712 [==============================] - 1s - loss: 0.0016 - acc: 0.7418 - val_loss: 0.0012 - val_acc: 0.7477
Epoch 60/100
1664/1712 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7524Epoch 00059: val_loss did not improve
1712/1712 [==============================] - 1s - loss: 0.0016 - acc: 0.7494 - val_loss: 0.0013 - val_acc: 0.7593
Epoch 61/100
Epoch 61/100: loss: 0.0016 - acc: 0.7477 - val_loss: 0.0015 - val_acc: 0.7523
Epoch 62/100: loss: 0.0016 - acc: 0.7553 - val_loss: 0.0013 - val_acc: 0.7617
Epoch 63/100: loss: 0.0016 - acc: 0.7588 - val_loss: 0.0014 - val_acc: 0.7617
Epoch 64/100: loss: 0.0016 - acc: 0.7582 - val_loss: 0.0012 - val_acc: 0.7500 (val_loss improved from 0.00123 to 0.00120, saving model to weights.final.hdf5)
Epoch 65/100: loss: 0.0016 - acc: 0.7506 - val_loss: 0.0013 - val_acc: 0.7664
Epoch 66/100: loss: 0.0015 - acc: 0.7576 - val_loss: 0.0011 - val_acc: 0.7664 (val_loss improved from 0.00120 to 0.00114, saving model to weights.final.hdf5)
Epoch 67/100: loss: 0.0015 - acc: 0.7623 - val_loss: 0.0011 - val_acc: 0.7617 (val_loss improved from 0.00114 to 0.00113, saving model to weights.final.hdf5)
Epoch 68/100: loss: 0.0015 - acc: 0.7500 - val_loss: 0.0012 - val_acc: 0.7617
Epoch 69/100: loss: 0.0015 - acc: 0.7506 - val_loss: 0.0012 - val_acc: 0.7640
Epoch 70/100: loss: 0.0015 - acc: 0.7634 - val_loss: 0.0012 - val_acc: 0.7617
Epoch 71/100: loss: 0.0015 - acc: 0.7722 - val_loss: 0.0012 - val_acc: 0.7430
Epoch 72/100: loss: 0.0015 - acc: 0.7675 - val_loss: 0.0011 - val_acc: 0.7547 (val_loss improved from 0.00113 to 0.00109, saving model to weights.final.hdf5)
Epoch 73/100: loss: 0.0014 - acc: 0.7582 - val_loss: 0.0011 - val_acc: 0.7757 (val_loss improved from 0.00109 to 0.00109, saving model to weights.final.hdf5)
Epoch 74/100: loss: 0.0015 - acc: 0.7716 - val_loss: 0.0011 - val_acc: 0.7617
Epoch 75/100: loss: 0.0014 - acc: 0.7669 - val_loss: 0.0012 - val_acc: 0.7804
Epoch 76/100: loss: 0.0014 - acc: 0.7623 - val_loss: 0.0012 - val_acc: 0.7617
Epoch 77/100: loss: 0.0014 - acc: 0.7617 - val_loss: 0.0012 - val_acc: 0.7593
Epoch 78/100: loss: 0.0014 - acc: 0.7646 - val_loss: 0.0012 - val_acc: 0.7547
Epoch 79/100: loss: 0.0014 - acc: 0.7856 - val_loss: 0.0010 - val_acc: 0.7850 (val_loss improved from 0.00109 to 0.00104, saving model to weights.final.hdf5)
Epoch 80/100: loss: 0.0014 - acc: 0.7634 - val_loss: 0.0011 - val_acc: 0.7593
Epoch 81/100: loss: 0.0013 - acc: 0.7728 - val_loss: 0.0012 - val_acc: 0.7687
Epoch 82/100: loss: 0.0014 - acc: 0.7693 - val_loss: 0.0011 - val_acc: 0.7617
Epoch 83/100: loss: 0.0013 - acc: 0.7687 - val_loss: 0.0011 - val_acc: 0.7780
Epoch 84/100: loss: 0.0013 - acc: 0.7687 - val_loss: 0.0010 - val_acc: 0.7617 (val_loss improved from 0.00104 to 0.00103, saving model to weights.final.hdf5)
Epoch 85/100: loss: 0.0014 - acc: 0.7827 - val_loss: 0.0011 - val_acc: 0.7757
Epoch 86/100: loss: 0.0013 - acc: 0.7786 - val_loss: 0.0011 - val_acc: 0.7827
Epoch 87/100: loss: 0.0013 - acc: 0.7734 - val_loss: 0.0011 - val_acc: 0.7804
Epoch 88/100: loss: 0.0013 - acc: 0.7821 - val_loss: 0.0011 - val_acc: 0.7687
Epoch 89/100: loss: 0.0013 - acc: 0.7874 - val_loss: 0.0011 - val_acc: 0.7827
Epoch 90/100: loss: 0.0013 - acc: 0.7775 - val_loss: 0.0011 - val_acc: 0.7827
Epoch 91/100: loss: 0.0013 - acc: 0.7646 - val_loss: 0.0010 - val_acc: 0.7850 (val_loss improved from 0.00103 to 0.00101, saving model to weights.final.hdf5)
Epoch 92/100: loss: 0.0013 - acc: 0.7874 - val_loss: 9.9419e-04 - val_acc: 0.7780 (val_loss improved from 0.00101 to 0.00099, saving model to weights.final.hdf5)
Epoch 93/100: loss: 0.0012 - acc: 0.7792 - val_loss: 0.0011 - val_acc: 0.7850
Epoch 94/100: loss: 0.0012 - acc: 0.7921 - val_loss: 0.0010 - val_acc: 0.7780
Epoch 95/100: loss: 0.0012 - acc: 0.7815 - val_loss: 9.8428e-04 - val_acc: 0.7967 (val_loss improved from 0.00099 to 0.00098, saving model to weights.final.hdf5)
Epoch 96/100: loss: 0.0012 - acc: 0.7804 - val_loss: 9.5388e-04 - val_acc: 0.8037 (val_loss improved from 0.00098 to 0.00095, saving model to weights.final.hdf5)
Epoch 97/100: loss: 0.0012 - acc: 0.7792 - val_loss: 9.6836e-04 - val_acc: 0.7850
Epoch 98/100: loss: 0.0012 - acc: 0.7897 - val_loss: 9.7120e-04 - val_acc: 0.7944
Epoch 99/100: loss: 0.0012 - acc: 0.7798 - val_loss: 0.0010 - val_acc: 0.7850
Epoch 100/100: loss: 0.0012 - acc: 0.7734 - val_loss: 9.8383e-04 - val_acc: 0.8061

Step 7: Visualize the Loss and Test Predictions

(IMPLEMENTATION) Answer a few questions and visualize the loss

Question 1: Outline the steps you took to get to your final neural network architecture and your reasoning at each step.

Answer: I followed the general pattern of increasing the number of filters at each subsequent convolutional layer, starting at 16 and growing to 512. Training was fast, so I ran it for 100 epochs, which led to a validation accuracy of around 80%. I used the Adam optimizer, which I had had good results with previously, and mean squared error ('mse') as the loss function; that is a standard choice for a regression task like predicting keypoint coordinates. I picked a dropout rate of 0.2, which seemed to work well, although I am not aware of a systematic way of choosing that value. I experimented with different batch sizes and settled on a slightly larger batch of 128, since training speed remained good.
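For reference, here is a minimal Keras sketch of an architecture along these lines. The exact layer counts, kernel sizes, and the 15-keypoint output size are illustrative assumptions, not the precise model used:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# Illustrative architecture: filter counts grow from 16 toward 512,
# with dropout of 0.2 after each pooling stage (assumed layout).
model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(96, 96, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(30))  # 30 outputs = 15 (x, y) keypoint pairs, normalized to [-1, 1]
model.summary()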

Question 2: Defend your choice of optimizer. Which optimizers did you test, and how did you determine which worked best?

Answer: I tried only two optimizers: Adam and plain gradient descent (TensorFlow's GradientDescentOptimizer). Adam performed better. The algorithm (Kingma and Ba) keeps moving averages of the gradients (a form of momentum), which lets it take a larger effective step size and therefore train faster. Although Adam does more computation per step (it must maintain and store those moving averages), it requires less hyperparameter tuning. The results with Adam were good, so I didn't experiment any further.
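As an illustration, a comparison like this could be run in Keras as follows. This is a sketch assuming Keras 2's fit() signature; build_model() is a hypothetical helper that returns a fresh copy of the network above, and the SGD learning rate is an assumed value:

from keras.optimizers import Adam, SGD

# Train the same architecture with each candidate optimizer and
# compare final validation loss.
for opt in [Adam(), SGD(lr=0.01)]:
    model = build_model()  # hypothetical: returns an uncompiled fresh model
    model.compile(optimizer=opt, loss='mse', metrics=['accuracy'])
    hist = model.fit(X_train, y_train, validation_split=0.2,
                     epochs=20, batch_size=128, verbose=0)
    print(type(opt).__name__, 'final val_loss:', hist.history['val_loss'][-1])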

Use the code cell below to plot the training and validation loss of your neural network. You may find this resource useful.

In [20]:
## TODO: Visualize the training and validation loss of your neural network
plt.plot(range(epochs), hist_final.history['val_loss'], 'b-', label='Val Loss')
plt.plot(range(epochs), hist_final.history['loss'], 'r--', label='Train Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

Question 3: Do you notice any evidence of overfitting or underfitting in the above plot? If so, what steps have you taken to improve your model? Note that slight overfitting or underfitting will not hurt your chances of a successful submission, as long as you have attempted some solutions towards improving your model (such as regularization, dropout, increased/decreased number of layers, etc).

Answer: I don't see any evidence of overfitting. I did use dropout to regularize during training. As the plot above shows, the validation loss tracks the training loss closely, and both decrease monotonically, which is a sign the model is not overfitting. It appears that around 60-80 epochs would have been sufficient to get most of the way there, but I ran the full 100 since the computations were very fast.
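If overfitting did appear, one common remedy beyond dropout is to stop training once the validation loss stalls. Below is a minimal sketch using Keras callbacks; the ModelCheckpoint mirrors the "saving model to weights.final.hdf5" messages visible in the training log above, while the EarlyStopping callback and its patience value are assumptions added for illustration:

from keras.callbacks import EarlyStopping, ModelCheckpoint

# Save the weights whenever validation loss improves (as in the log above).
checkpoint = ModelCheckpoint('weights.final.hdf5', monitor='val_loss',
                             save_best_only=True, verbose=1)
# Assumed addition: halt training after 10 epochs without improvement.
early_stop = EarlyStopping(monitor='val_loss', patience=10)

hist_final = model.fit(X_train, y_train, validation_split=0.2,
                       epochs=100, batch_size=128,
                       callbacks=[checkpoint, early_stop])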

Visualize a Subset of the Test Predictions

Execute the code cell below to visualize your model's predicted keypoints on a subset of the testing images.

In [21]:
y_test = model.predict(X_test)
fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_test[i], y_test[i], ax)

Step 8: Complete the pipeline

With the work you did in Sections 1 and 2 of this notebook, along with your freshly trained facial keypoint detector, you can now complete the full pipeline. That is, given a color image containing a person or persons, you can now:

  • Detect the faces in this image automatically using OpenCV
  • Predict the facial keypoints in each face detected in the image
  • Paint predicted keypoints on each face detected

In this subsection, you will do just that!

(IMPLEMENTATION) Facial Keypoints Detector

Use the OpenCV face detection functionality you built in previous sections to extend your keypoint detector to color images of arbitrary size. Your function should perform the following steps:

  1. Accept a color image.
  2. Convert the image to grayscale.
  3. Detect and crop the face contained in the image.
  4. Locate the facial keypoints in the cropped image.
  5. Overlay the facial keypoints in the original (color, uncropped) image.

Note: step 4 can be the trickiest, because your convolutional network was trained only on $96 \times 96$ grayscale images whose pixel values were normalized to the interval $[0,1]$, with each facial keypoint normalized to the interval $[-1,1]$. Practically speaking, to paint detected keypoints onto a candidate face you must apply this same pre-processing: after detecting the face, resize it to $96 \times 96$ and normalize its values before feeding it into your facial keypoint detector. To display the keypoints correctly on the original image, the detector's outputs then need to be shifted and re-scaled from the interval $[-1,1]$ back to the width and height of the detected face. A sketch of this round trip appears below.
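Here is a minimal sketch of that pre-process/de-normalize round trip; the two helper function names are hypothetical, not from the notebook's template:

import cv2
import numpy as np

def preprocess_face(gray_face):
    # Resize the detected face to 96x96 and scale pixel values to [0, 1].
    face = cv2.resize(gray_face, (96, 96)) / 255.0
    # Add channel and batch dimensions: shape (1, 96, 96, 1) for the CNN.
    return face.reshape(1, 96, 96, 1)

def denormalize_keypoints(landmarks, w, h, x0=0, y0=0):
    # Map predicted coordinates from [-1, 1] back to the 96x96 crop,
    # rescale to the face's original width/height, then shift by the
    # face's top-left corner (x0, y0) in the full image.
    xs = (landmarks[0::2] * 48 + 48) * w / 96 + x0
    ys = (landmarks[1::2] * 48 + 48) * h / 96 + y0
    return xs, ys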

When complete, you should be able to produce example images like the one below.

In [23]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')


# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)


# Plot our image
fig = plt.figure(figsize=(9, 9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('image')
ax1.imshow(image)
Out[23]:
<matplotlib.image.AxesImage at 0x7f637e341128>
In [47]:
### TODO: Use the face detection code we saw in Section 1 with your trained conv-net
## TODO : Paint the predicted keypoints on the test image

# Defining the paths:
img_pth = 'images/obamas4.jpg'
face_cascade_pth = 'detector_architectures/haarcascade_frontalface_default.xml'
model_pth = 'my_model.h5'

# Defining the parameters:
scale = 1.2    # scaleFactor for the Haar cascade
neighbors = 4  # minNeighbors for the Haar cascade
key_sz = 8     # marker size for the plotted keypoints

# Detecting faces:
face_cascade = cv2.CascadeClassifier(face_cascade_pth)
img = cv2.imread(img_pth)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scale, neighbors)

# Plotting:
fig = plt.figure(figsize=(40, 40))
ax = fig.add_subplot(121, xticks=[], yticks=[])
ax.set_title('Image with facial keypoints')
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_detected = np.copy(img)

# Loading the model:
model = load_model(model_pth)

# Get the bounding box for each detected face
for (x, y, w, h) in faces:
    # Add a blue bounding box to the detections image
    cv2.rectangle(image_detected, (x, y), (x + w, y + h), (255, 0, 0), 3)
    # Crop the face from the original image, so the drawn rectangle
    # does not bleed into the network's input
    bgr_crop = img[y:y + h, x:x + w]
    orig_shape_crop = bgr_crop.shape  # (height, width, channels)
    gray_crop = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2GRAY)
    # Match the training pre-processing: 96x96 with pixel values in [0, 1]
    resize_gray_crop = cv2.resize(gray_crop, (96, 96)) / 255
    # I received help from the forum for the following line of code:
    landmarks = np.squeeze(model.predict(np.expand_dims(np.expand_dims(resize_gray_crop, axis=-1), axis=0)))
    # De-normalize: [-1, 1] -> 96x96 crop coordinates -> original crop size,
    # scaling x by the crop's width (shape[1]) and y by its height (shape[0]),
    # then shifting by the face's top-left corner (x, y)
    ax.scatter(((landmarks[0::2] * 48 + 48) * orig_shape_crop[1] / 96) + x,
               ((landmarks[1::2] * 48 + 48) * orig_shape_crop[0] / 96) + y,
               marker='o', c='y', s=key_sz)

# Plotting the final image:
ax.imshow(cv2.cvtColor(image_detected, cv2.COLOR_BGR2RGB))
Number of faces detected: 2
Out[47]:
<matplotlib.image.AxesImage at 0x7f6335573f98>